
    A neurobiological and computational analysis of target discrimination in visual clutter by the insect visual system.

    Some insects have the capability to detect and track small moving objects, often against cluttered moving backgrounds. Determining how this task is performed is an intriguing challenge, both from a physiological and a computational perspective. Previous research has characterized higher-order neurons within the fly brain known as 'small target motion detectors' (STMD) that respond selectively to targets, even within complex moving surrounds. Interestingly, these cells still respond robustly when the velocity of the target is matched to the velocity of the background (i.e. with no relative motion cues). We performed intracellular recordings from intermediate-order neurons in the fly visual system (the medulla). These full-wave rectifying, transient cells (RTC) reveal independent adaptation to luminance changes of opposite signs (suggesting separate 'on' and 'off' channels) and fast adaptive temporal mechanisms (as seen in some previously described cell types). We show, via electrophysiological experiments, that the RTC is temporally responsive to rapidly changing stimuli and is well suited to serving an important function in a proposed target-detecting pathway. To model this target discrimination, we use high dynamic range (HDR) natural images to represent 'real-world' luminance values that serve as inputs to a biomimetic representation of photoreceptor processing. Adaptive spatiotemporal high-pass filtering (1st-order interneurons) shapes the transient 'edge-like' responses, useful for feature discrimination. Following this, a model for the RTC implements a nonlinear facilitation between the rapidly adapting, independent contrast channels of opposite polarity, each with centre-surround antagonism. The recombination of the channels results in increased discrimination of small targets, approximately the size of a single pixel, without the need for relative motion cues. This method of feature discrimination contrasts with traditional target and background motion-field computations. We show that our RTC-based target detection model is well matched to properties described for the higher-order STMD neurons, such as contrast sensitivity, height tuning and velocity tuning. The model output shows that the spatiotemporal profile of small targets is sufficiently rare within natural scene imagery to allow our highly nonlinear 'matched filter' to successfully detect many targets from the background. The model produces robust target discrimination across a biologically plausible range of target sizes and a range of velocities. We show that the output of the small target motion detection model is highly correlated with the velocity of the stimulus but not with other background statistics, such as local brightness or local contrast, which normally influence target detection tasks. From an engineering perspective, we examine model elaborations for improved target discrimination via inhibitory interactions from correlation-type motion detectors, using a form of antagonism between our feature correlator and the more typical motion correlator. We also observe that the optimal threshold changes in a way that is highly correlated with observer ego-motion. We present an elaborated target detection model that allows for implementation of a static optimal threshold, by scaling the target discrimination mechanism with a model-derived estimate of ego-motion velocity. Finally, we investigate the physiological relevance of this target discrimination model.
We show that, via subtle manipulation of the visual stimulus, our model accurately predicts dramatic changes in the electrophysiological responses observed from STMD neurons. Thesis (Ph.D.) - University of Adelaide, School of Molecular and Biomedical Science, 200
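
    As a rough illustration of the recombination step described above, the Python sketch below splits a high-pass, LMC-like frame into ON and OFF channels, applies centre-surround antagonism to each, and multiplies the current ON channel by the previous frame's OFF channel. The function name, the one-frame delay, the uniform surround and its size are assumptions for illustration, not parameters taken from the thesis model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rtc_step(lmc_frame, state, surround=5):
    """One frame of a simplified RTC-like stage (illustrative only).

    Splits the high-pass input into ON and OFF channels, applies
    centre-surround antagonism to each, then multiplies the current ON
    channel by the previous frame's OFF channel: a small dark target
    produces an OFF transient followed by an ON transient at the same
    pixel, so this product boosts target-like features relative to
    extended edges.
    """
    on = np.maximum(lmc_frame, 0.0)    # ON half-wave rectification
    off = np.maximum(-lmc_frame, 0.0)  # OFF half-wave rectification

    # centre-surround antagonism: subtract a local surround estimate
    on_cs = np.maximum(on - uniform_filter(on, surround), 0.0)
    off_cs = np.maximum(off - uniform_filter(off, surround), 0.0)

    # nonlinear recombination with a one-frame delay on the OFF channel
    delayed_off = state.get("delayed_off", np.zeros_like(off_cs))
    salience = on_cs * delayed_off
    state["delayed_off"] = off_cs
    return salience
```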

    A Model for the Detection of Moving Targets in Visual Clutter Inspired by Insect Physiology

    We present a computational model for target discrimination based on intracellular recordings from neurons in the fly visual system. Determining how insects detect and track small moving features, often against cluttered moving backgrounds, is an intriguing challenge, both from a physiological and a computational perspective. Previous research has characterized higher-order neurons within the fly brain, known as 'small target motion detectors' (STMD), that respond robustly to moving features, even when the velocity of the target is matched to that of the background (i.e. with no relative motion cues). We recorded from intermediate-order neurons in the fly visual system that are well suited as a component along the target detection pathway. These full-wave rectifying, transient cells (RTC) reveal independent adaptation to luminance changes of opposite signs (suggesting separate ON and OFF channels) and fast adaptive temporal mechanisms, similar to other cell types previously described. From these physiological data we have created a numerical model for target discrimination. This model includes nonlinear filtering based on the fly optics, the photoreceptors, the 1st-order interneurons (Large Monopolar Cells), and the newly derived parameters for the RTC. We show that our RTC-based target detection model is well matched to properties described for the STMDs, such as contrast sensitivity, height tuning and velocity tuning. The model output shows that the spatiotemporal profile of small targets is sufficiently rare within natural scene imagery to allow our highly nonlinear 'matched filter' to successfully detect most targets from the background. Importantly, this model can explain this type of feature discrimination without the need for relative motion cues.
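
    For context on the front end of this pipeline, here is a minimal sketch of a first-order temporal high-pass filter standing in for the LMC stage. The published model uses adaptive spatiotemporal filtering with fitted parameters; the cutoff frequency, frame rate and function name below are placeholder assumptions.

```python
import numpy as np

def lmc_highpass(frames, cutoff_hz=2.0, fps=100.0):
    """First-order temporal high-pass filter standing in for the LMC stage.

    y[t] = a * (y[t-1] + x[t] - x[t-1]), with a = 1 / (1 + 2*pi*fc/fs),
    which passes luminance transients and removes the standing level.
    """
    alpha = 1.0 / (1.0 + 2.0 * np.pi * cutoff_hz / fps)
    frames = np.asarray(frames, dtype=float)   # shape (T, H, W)
    out = np.zeros_like(frames)
    prev_in = frames[0]
    prev_out = np.zeros_like(frames[0])
    for t in range(frames.shape[0]):
        prev_out = alpha * (prev_out + frames[t] - prev_in)
        prev_in = frames[t]
        out[t] = prev_out
    return out
```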

    Robustness and real-time performance of an insect inspired target tracking algorithm under natural conditions

    Many computer vision tasks require the implementation of robust and efficient target tracking algorithms. Furthermore, in robotic applications these algorithms must perform whilst on a moving platform (ego motion). Despite the increase in computational processing power, many engineering algorithms are still challenged by real-time applications. In contrast, lightweight and low-power flying insects, such as dragonflies, can readily chase prey and mates within cluttered natural environments, deftly selecting their target amidst distractors (swarms). In our laboratory, we record from 'target-detecting' neurons in the dragonfly brain that underlie this pursuit behaviour. We recently developed a closed-loop target detection and tracking algorithm based on key properties of these neurons. Here we test our insect-inspired tracking model in open loop against a set of naturalistic sequences and compare its efficacy and efficiency with other state-of-the-art engineering models. In terms of tracking robustness, our model performs similarly to many of these trackers, yet is at least three times more efficient in terms of processing speed.

    Visual acuity of the honey bee retina and the limits for feature detection

    Visual abilities of the honey bee have been studied for more than 100 years, recently revealing unexpectedly sophisticated cognitive skills rivalling those of vertebrates. However, the physiological limits of the honey bee eye have been largely unaddressed and only studied in an unnatural, dark state. Using a bright display and intracellular recordings, we systematically investigated the angular sensitivity across the light-adapted eye of honey bee foragers. Angular sensitivity is a measure of photoreceptor receptive field size, so smaller values indicate higher visual acuity. Our recordings reveal a fronto-ventral acute zone in which angular sensitivity falls below 1.9°, some 30% smaller than previously reported. By measuring receptor noise and responses to moving dark objects, we also obtained direct measures of the smallest features detectable by the retina. In the frontal eye, single photoreceptors respond to objects as small as 0.6° × 0.6° with >99% reliability. This indicates that honey bee foragers possess significantly better resolution than previously reported, estimated behaviourally, or commonly assumed in models of bee acuity.
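
    To illustrate why a 0.6° object is a demanding stimulus for a receptor with a 1.9° acceptance angle, the sketch below integrates a Gaussian angular sensitivity function (a common approximation, assumed here rather than taken from the paper) over a centred square object; the grid resolution and function name are arbitrary choices.

```python
import numpy as np

def effective_signal(object_width_deg, delta_rho_deg, contrast=1.0):
    """Fraction of a photoreceptor's Gaussian angular sensitivity covered
    by a centred square object, scaled by object contrast, as a rough
    proxy for the signal the object produces in that receptor."""
    theta = np.linspace(-3.0 * delta_rho_deg, 3.0 * delta_rho_deg, 601)
    tx, ty = np.meshgrid(theta, theta)
    sens = np.exp(-4.0 * np.log(2.0) * (tx**2 + ty**2) / delta_rho_deg**2)
    inside = (np.abs(tx) <= object_width_deg / 2.0) & (np.abs(ty) <= object_width_deg / 2.0)
    return contrast * sens[inside].sum() / sens.sum()

# A 0.6 deg square seen through a 1.9 deg acceptance angle drives only a
# small fraction of the receptive field, which is why receptor noise
# measurements are needed to judge whether detection is reliable.
print(effective_signal(0.6, 1.9))   # roughly 0.08 for a unit-contrast object
```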

    ACRA_files_2018_and_2019.zip

    Scripts and supporting files underlying: J. V. James, B. S. Cazzolato, S. Grainger, D. C. O’Carroll, and S. D. Wiederman, “An insect-inspired detection algorithm for aerial drone detection,” in Australasian Conference on Robotics and Automation (ACRA), Australian Robotics and Automation Association, 2018, pp. 1–9. [Online]. Available: https://ssl.linklings.net/conferences/acra/acra2018_proceedings/views/includes/files/pap120s1-file1.pdf; and J. V. James, B. S. Cazzolato, S. Grainger, and S. D. Wiederman, “A probabilistic tracker for a bio-inspired target detection algorithm,” in Australasian Conference on Robotics and Automation (ACRA), Australian Robotics and Automation Association, 2019, pp. 1–10. [Online]. Available: https://ssl.linklings.net/conferences/acra/acra2019proceedings/views/includes/files/pap149s1-file1.pdf

    Properties of neuronal facilitation that improve target tracking in natural pursuit simulations.

    Although flying insects have limited visual acuity (approx. 1°) and relatively small brains, many species pursue tiny targets against cluttered backgrounds with high success. Our previous computational model, inspired by electrophysiological recordings from insect 'small target motion detector' (STMD) neurons, did not account for several key properties described in the biological system. These include the recent observations of response 'facilitation' (a slow build-up of response to targets that move on long, continuous trajectories) and 'selective attention', a competitive mechanism that selects one target from alternatives. Here, we present an elaborated STMD-inspired model, implemented in a closed-loop target-tracking system that uses an active saccadic gaze fixation strategy inspired by insect pursuit. We test this system against heavily cluttered natural scenes. Inclusion of facilitation not only substantially improves success for even short-duration pursuits, but also enhances the ability to 'attend' to one target in the presence of distractors. Our model predicts optimal facilitation parameters that are static in space and dynamic in time, changing with respect to the amount of background clutter and the intended purpose of the pursuit. Our results provide insights into insect neurophysiology and show the potential of this algorithm for implementation in artificial visual systems and robotic applications.
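
    A minimal sketch of the facilitation idea, not the paper's implementation: a spatial gain map builds up where a target has recently been detected, decays elsewhere, and multiplicatively scales the raw response. The gain, decay and floor constants are assumed values.

```python
import numpy as np

def update_facilitation(fac_map, detection_map, gain=0.2, decay=0.05):
    """Build the facilitation map up where targets were recently detected
    and let it decay everywhere else (slow temporal dynamics)."""
    return (1.0 - decay) * fac_map + gain * detection_map

def facilitated_response(raw_response, fac_map, floor=1.0):
    """Scale the raw STMD-like response by the facilitation map; the floor
    keeps unfacilitated locations weakly responsive."""
    return raw_response * (floor + fac_map)

# toy usage on a 3x3 patch: repeated detections at the centre pixel
fac = np.zeros((3, 3))
det = np.zeros((3, 3))
det[1, 1] = 1.0
for _ in range(10):
    fac = update_facilitation(fac, det)
# the centre pixel is now boosted relative to its neighbours
print(facilitated_response(np.ones((3, 3)), fac))
```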

    Performance assessment of an insect-inspired target tracking model in background clutter

    Biological visual systems provide excellent examples of robust target detection and tracking mechanisms capable of performing in a wide range of environments. Consequently, they have been sources of inspiration for many artificial vision algorithms. However, testing the robustness of target detection and tracking algorithms is a challenging task due to the diversity of environments in which these algorithms are applied. Correlating image quality metrics with model performance is one way to deal with this problem. Previously, we developed a target detection model inspired by the physiology of insects and implemented it in a closed-loop target-tracking algorithm. In the current paper, we vary the kinetics of a salience-enhancing element of our algorithm and test its effect on the robustness of our model across different natural images, in order to characterise the relationship between model performance and background clutter.

    Performance of an insect-inspired target tracker in natural conditions

    Robust and efficient target-tracking algorithms embedded on moving platforms are a requirement for many computer vision and robotic applications. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. As inspiration, we look to lightweight, low-powered biological solutions: flying insects. For example, dragonflies pursue prey and mates within cluttered, natural environments, deftly selecting their target amidst swarms. In our laboratory, we study the physiology and morphology of dragonfly 'small target motion detector' neurons likely to underlie this pursuit behaviour. Here we describe our insect-inspired tracking model derived from these data and compare its efficacy and efficiency with state-of-the-art engineering models. For model inputs, we use both publicly available video sequences and our own task-specific dataset (small targets embedded within natural scenes). In the context of the tracking problem, we describe differences in object statistics between the video sequences. For the general dataset, our model often locks on to small components of larger objects, tracking these moving features. When the input imagery includes small moving targets, to which our highly nonlinear filtering is matched, our model outperforms state-of-the-art trackers in robustness. In all scenarios, our insect-inspired tracker runs at least twice as fast as the comparison algorithms.

    Salience invariance with divisive normalization in higher-order insect neurons

    We present a biologically inspired model for estimating the position of a moving target that is invariant to the target's contrast. Our model produces a monotonic relationship between position and output activity using a divisive normalization between the 'receptive fields' of two overlapping, wide-field, small-target motion detector (STMD) neurons. These visual neurons, found in flying insects, likely underlie the impressive ability to pursue prey within cluttered environments. Individual STMD responses confound the properties of target contrast, size, velocity and position. Inspired by results from STMD recordings, we developed a model using a division operation to overcome the inherent positional ambiguities of integrative neurons. We used genetic algorithms to determine the plausibility of such an operation arising and persisting over multiple generations. This method allows the lost information to be recovered without needing additional neuronal pathways.
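
    The core divisive operation can be illustrated with a short sketch; the two inputs stand in for the summed outputs of two overlapping wide-field receptive fields, and the small constant eps is an implementation convenience to avoid division by zero, not part of the model.

```python
import numpy as np

def position_estimate(r_left, r_right, eps=1e-9):
    """Divisive-normalization readout of target position from two
    overlapping wide-field responses: (R1 - R2) / (R1 + R2) cancels any
    common scaling from target contrast or size, leaving a monotonic
    code for position between the two receptive fields."""
    r_left = np.asarray(r_left, dtype=float)
    r_right = np.asarray(r_right, dtype=float)
    return (r_left - r_right) / (r_left + r_right + eps)

# Halving target contrast scales both responses by the same factor,
# so the position estimate is essentially unchanged:
print(position_estimate(0.8, 0.2), position_estimate(0.4, 0.1))  # ~0.6, ~0.6
```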